Learning continuous image representations has recently gained popularity for image super-resolution (SR) because of its ability to reconstruct high-resolution images at arbitrary scales from low-resolution inputs. Existing methods mostly ensemble nearby features to predict the new pixel at any queried coordinate in the SR image. Such a local ensemble suffers from several limitations: i) it has no learnable parameters and neglects the similarity of visual features; ii) it has a limited receptive field and cannot ensemble relevant features over a large region of the image; iii) it inherently has a gap with real camera imaging since it depends only on the coordinate. To address these issues, this paper proposes a continuous implicit attention-in-attention network, called CiaoSR. We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features. Furthermore, we embed a scale-aware attention in this implicit attention network to exploit additional non-local information. Extensive experiments on benchmark datasets demonstrate that CiaoSR significantly outperforms existing single image super-resolution (SISR) methods with the same backbone. In addition, the proposed method achieves state-of-the-art performance on the arbitrary-scale SR task. Its effectiveness is also demonstrated in the real-world SR setting. More importantly, CiaoSR can be flexibly integrated into any backbone to improve SR performance.
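To make the idea concrete, below is a minimal sketch (not the authors' code) of replacing a fixed, coordinate-only local ensemble with learned attention weights over nearby features. The layer sizes, the use of the query cell size as a scale cue, and all tensor shapes are illustrative assumptions:

```python
# Sketch: attention-based ensemble of the N nearest LR features for one query.
import torch
import torch.nn as nn

class ImplicitAttentionEnsemble(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        # Query from the relative coordinate plus the query cell size (scale cue);
        # keys and values come from the nearby visual features.
        self.to_q = nn.Linear(2 + 2, hidden)
        self.to_k = nn.Linear(feat_dim + 2, hidden)
        self.to_v = nn.Linear(feat_dim, hidden)
        self.out = nn.Linear(hidden, 3)  # predict the RGB of the queried pixel

    def forward(self, local_feats, rel_coords, cell):
        # local_feats: (B, N, C) features of the N nearest LR locations
        # rel_coords:  (B, N, 2) query coordinate minus each feature's coordinate
        # cell:        (B, 2)    size of the queried HR pixel (encodes the scale)
        q = self.to_q(torch.cat([rel_coords.mean(1), cell], dim=-1)).unsqueeze(1)
        k = self.to_k(torch.cat([local_feats, rel_coords], dim=-1))
        v = self.to_v(local_feats)
        # Learned ensemble weights replace fixed area/bilinear weights.
        attn = torch.softmax((q * k).sum(-1) / k.shape[-1] ** 0.5, dim=1)  # (B, N)
        agg = (attn.unsqueeze(-1) * v).sum(1)
        return self.out(agg)

# Usage with dummy tensors: 4 nearest features per query pixel.
m = ImplicitAttentionEnsemble()
rgb = m(torch.randn(8, 4, 64), torch.randn(8, 4, 2), torch.rand(8, 2))
print(rgb.shape)  # torch.Size([8, 3])
```

Because the weights are predicted from both coordinates and feature content, the ensemble can respond to visual similarity rather than distance alone, which is the gap the abstract identifies in prior local-ensemble methods.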
Depth map super-resolution (DSR) is a fundamental task in 3D computer vision. While arbitrary-scale DSR is the more realistic setting in this scenario, previous approaches predominantly suffer from inefficient real-numbered scale upsampling. To explicitly address this issue, we propose a novel continuous depth representation for DSR. The heart of this representation is our proposed Geometric Spatial Aggregator (GSA), which exploits a distance field modulated by arbitrarily upsampled target gridding, through which geometric information is explicitly introduced into feature aggregation and target generation. Furthermore, building on GSA, we present a transformer-style backbone named GeoDSR, which provides a principled way to construct the functional mapping between local coordinates and the high-resolution output, equipping our model with arbitrary-scale transformation capabilities ready to serve diverse zooming demands. Extensive experimental results on standard depth map benchmarks, e.g., NYU v2, demonstrate that the proposed framework achieves significant restoration gains in arbitrary-scale depth map super-resolution compared with prior art. Our code is available at https://github.com/nana01219/GeoDSR.
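The following sketch illustrates the underlying idea of a distance-field-modulated aggregator: the relative offsets between arbitrary-scale target grid points and their source pixels explicitly condition the aggregation. The Gaussian gating rule and all shapes are simplifying assumptions, not GeoDSR's actual design:

```python
# Sketch: upsample features to a non-integer scale, gated by the distance field.
import torch
import torch.nn.functional as F

def make_coord(h, w):
    # Pixel-center coordinates in [-1, 1], a common convention in implicit SR.
    ys = (torch.arange(h) + 0.5) / h * 2 - 1
    xs = (torch.arange(w) + 0.5) / w * 2 - 1
    return torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (h, w, 2)

def distance_modulated_upsample(feat, out_h, out_w):
    # feat: (B, C, h, w) low-resolution features; output: (B, C, out_h, out_w).
    b, c, h, w = feat.shape
    tgt = make_coord(out_h, out_w).view(1, -1, 2).expand(b, -1, -1)  # (B, HW, 2)
    grid = tgt.flip(-1).unsqueeze(1)  # grid_sample expects (x, y) order
    sampled = F.grid_sample(feat, grid, mode="nearest", align_corners=False)
    sampled = sampled.squeeze(2).transpose(1, 2)  # (B, HW, C)
    src_map = make_coord(h, w).permute(2, 0, 1)[None].expand(b, -1, -1, -1)
    src = F.grid_sample(src_map, grid, mode="nearest", align_corners=False)
    src = src.squeeze(2).transpose(1, 2)  # coordinate of each nearest source pixel
    offset = tgt - src  # the distance field between target grid and source pixels
    offset[..., 0] *= h / 2  # express offsets in source-pixel units
    offset[..., 1] *= w / 2
    gate = torch.exp(-(offset ** 2).sum(-1, keepdim=True))  # simple distance gating
    return (sampled * gate).transpose(1, 2).reshape(b, c, out_h, out_w)

out = distance_modulated_upsample(torch.randn(2, 16, 24, 32), 57, 71)  # real-valued scale
print(out.shape)  # torch.Size([2, 16, 57, 71])
```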
Neural Radiance Field (NeRF) has revolutionized free-viewpoint rendering tasks and achieved impressive results. However, efficiency and accuracy problems hinder its wide application. To address these issues, we propose the Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy to perform real-time novel view rendering and unsupervised depth estimation on unseen scenes without per-scene optimization. Distinct from most existing generalized NeRFs, our framework infers unseen scenes at both the pixel scale and the geometry scale with only a few input images. More specifically, our method learns common attributes of novel-view synthesis through an encoder-decoder structure and a point-level learnable multi-view feature fusion module that helps avoid occlusion. To preserve scene characteristics in the generalized model, we introduce an unsupervised depth estimation module to derive the coarse geometry, narrow the ray sampling interval down to the proximity space of the estimated surface, and sample at the expected maximum position, constituting the Geometry-Aware Dynamic Sampling (GADS) strategy. Moreover, we introduce a Multi-level Semantic Consistency (MSC) loss to assist more informative representation learning. Extensive experiments on indoor and outdoor datasets show that, compared with state-of-the-art generalized NeRF methods, GARF reduces the number of samples by more than 25%, while improving rendering quality and 3D geometry estimation.
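A minimal sketch of the geometry-aware sampling idea follows: instead of spreading samples over the full near-far interval, samples are concentrated in a narrow band around a coarse depth estimate. The band width and sample count are illustrative assumptions, not GARF's exact settings:

```python
# Sketch: concentrate ray samples around an estimated surface depth.
import torch

def dynamic_sampling(rays_o, rays_d, est_depth, n_samples=16, band=0.1):
    # rays_o, rays_d: (N, 3) ray origins and directions
    # est_depth:      (N,)   coarse depth from the unsupervised estimator
    t = torch.linspace(0.0, 1.0, n_samples)            # (S,) positions in the band
    near = (est_depth * (1 - band)).unsqueeze(-1)      # (N, 1)
    far = (est_depth * (1 + band)).unsqueeze(-1)       # (N, 1)
    z = near + (far - near) * t                        # (N, S) sample depths
    pts = rays_o.unsqueeze(1) + rays_d.unsqueeze(1) * z.unsqueeze(-1)
    return pts, z                                      # (N, S, 3), (N, S)

pts, z = dynamic_sampling(torch.zeros(4, 3), torch.randn(4, 3), torch.rand(4) + 1.0)
print(pts.shape, z.shape)  # torch.Size([4, 16, 3]) torch.Size([4, 16])
```

With a reliable depth prior, a handful of samples near the surface can replace dozens of samples spread over empty space, which is where the reported >25% sample reduction plausibly comes from.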
Recent years have witnessed rapid development in NeRF-based image rendering due to its high quality. However, point cloud rendering remains comparatively less explored. Compared to NeRF-based rendering, which suffers from dense spatial sampling, point cloud rendering is naturally less computation-intensive, which enables its deployment on mobile computing devices. In this work, we focus on boosting the image quality of point cloud rendering with a compact model design. We first analyze the adaptation of the volume rendering formulation to point clouds. Based on this analysis, we simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel. Further, motivated by ray marching, we rectify the noisy raw point clouds to the estimated intersections between rays and surfaces as queried coordinates, which avoids "spatial frequency collapse" and neighboring-point disturbance. Composed of rasterization, spatial mapping, and refinement stages, our method achieves state-of-the-art performance on point cloud rendering, outperforming prior works by notable margins with a smaller model size. We obtain a PSNR of 31.74 on NeRF-Synthetic, 25.88 on ScanNet, and 30.81 on DTU. Code and data are publicly available at https://github.com/seanywang0408/RadianceMapping.
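The sketch below illustrates the "single evaluation per pixel" contrast with NeRF: rather than integrating many samples along each ray, a rasterized per-pixel feature is mapped to a color with one network call. The feature size and the tiny MLP are assumptions for illustration:

```python
# Sketch: one network evaluation per pixel instead of per ray sample.
import torch
import torch.nn as nn

class SpatialMapping(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, pixel_feats, view_dirs):
        # pixel_feats: (H*W, F) features obtained by rasterizing the point cloud
        # view_dirs:   (H*W, 3) per-pixel viewing directions
        return self.mlp(torch.cat([pixel_feats, view_dirs], dim=-1))

m = SpatialMapping()
rgb = m(torch.randn(128 * 128, 32), torch.randn(128 * 128, 3))
print(rgb.shape)  # one evaluation per pixel: torch.Size([16384, 3])
```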
Human annotations are imperfect, especially when produced by junior practitioners. Multi-expert consensus is usually regarded as the gold standard, but this annotation protocol is too expensive to implement in many real-world settings. In this study, we propose a method to refine human annotations, named Neural Annotation Refinement (NeAR). It is based on a learnable implicit function that decodes a latent vector into a represented shape. By integrating the appearance as an input of the implicit function, the appearance-aware NeAR can fix annotation artifacts. Our approach is demonstrated in the application of adrenal gland analysis. We first show that NeAR can repair distorted gold standards on a public adrenal gland segmentation dataset. Furthermore, we develop a new adrenal gland analysis (ALAN) dataset with the proposed NeAR, where each case consists of an expert-assigned shape of the adrenal gland and its diagnosis label (normal vs. abnormal). We show that models trained on the NeAR-repaired shapes diagnose adrenal glands better than those trained on the originals. The ALAN dataset, which will be open-sourced, contains 1,594 shapes for adrenal gland diagnosis and serves as a new benchmark for medical shape analysis. Code and dataset are available at https://github.com/m3dv/near.
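Below is a minimal sketch of the kind of appearance-conditioned implicit function the abstract describes: a latent shape code, a local appearance value, and a query coordinate are decoded to an occupancy probability. All dimensions and the plain MLP are assumptions, not NeAR's architecture:

```python
# Sketch: implicit shape decoder conditioned on image appearance.
import torch
import torch.nn as nn

class ImplicitRefiner(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3 + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # occupancy logit at the queried point
        )

    def forward(self, z, coords, appearance):
        # z:          (B, L)    latent vector encoding one annotated shape
        # coords:     (B, N, 3) query points in the volume
        # appearance: (B, N, 1) image intensity sampled at the query points
        z = z.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return torch.sigmoid(self.net(torch.cat([z, coords, appearance], -1)))

m = ImplicitRefiner()
occ = m(torch.randn(2, 64), torch.rand(2, 1000, 3), torch.rand(2, 1000, 1))
print(occ.shape)  # torch.Size([2, 1000, 1])
```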
The projection map (PM) from optical coherence tomography (OCT) B-scans is an important tool for diagnosing retinal diseases, and obtaining it usually requires retinal layer segmentation. In this study, we propose a novel end-to-end framework to predict PMs from B-scans. Instead of segmenting retinal layers explicitly, we represent them implicitly as predicted coordinates. By interpolating pixels at coordinates uniformly sampled between the retinal layers, the corresponding PMs can easily be obtained by pooling. Notably, all the operators are differentiable, so this differentiable projection module (DPM) enables end-to-end training with the ground truth of PMs rather than retinal layer segmentation. Our framework produces high-quality PMs, significantly outperforming baselines that include a vanilla CNN without the DPM and an optimization-based DPM without a deep prior. Furthermore, the proposed DPM, as a novel neural representation of areas/volumes between curves/surfaces, may be of independent interest for geometric deep learning.
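A sketch of such a differentiable projection follows: given predicted upper and lower layer boundaries per column, the B-scan is sampled with grid_sample (which is differentiable in both image and coordinates) at rows spread uniformly between the boundaries, then pooled. The names and the mean pooling are illustrative assumptions:

```python
# Sketch: differentiable projection between two predicted layer boundaries.
import torch
import torch.nn.functional as F

def differentiable_projection(bscan, upper, lower, n_samples=16):
    # bscan: (B, 1, H, W); upper, lower: (B, W) boundary rows in [-1, 1]
    b, _, h, w = bscan.shape
    t = torch.linspace(0.0, 1.0, n_samples).view(1, n_samples, 1)   # (1, S, 1)
    ys = upper.unsqueeze(1) + (lower - upper).unsqueeze(1) * t      # (B, S, W)
    xs = torch.linspace(-1, 1, w).view(1, 1, w).expand(b, n_samples, w)
    grid = torch.stack([xs, ys], dim=-1)                            # (B, S, W, 2)
    # Gradients flow through `ys` back to the predicted boundaries.
    sampled = F.grid_sample(bscan, grid, align_corners=True)        # (B, 1, S, W)
    return sampled.mean(dim=2)  # pool along depth: one PM row per B-scan

pm = differentiable_projection(torch.rand(2, 1, 256, 512),
                               upper=torch.full((2, 512), -0.2),
                               lower=torch.full((2, 512), 0.3))
print(pm.shape)  # torch.Size([2, 1, 512])
```

Since the loss on the pooled output backpropagates into the boundary coordinates, the layers can be learned from PM ground truth alone, matching the abstract's claim that no segmentation labels are needed.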
Restoring reasonable and realistic content for arbitrary missing regions in an image is an important yet challenging task. Although recent image inpainting models have made significant progress in producing vivid visual details, they can still lead to texture blurring or structural distortions due to contextual ambiguity when handling more complex scenes. To address this issue, we propose the Semantic Pyramid Network (SPN), motivated by the idea that learning multi-scale semantic priors from specific pretext tasks can greatly benefit the recovery of locally missing content in images. SPN consists of two components. First, it distills semantic priors from a pretext model into a multi-scale feature pyramid, achieving a consistent understanding of global context and local structure. Within the prior learner, we also provide an optional module for variational inference to realize probabilistic image inpainting driven by various learned priors. The second component of SPN is a fully context-aware image generator, which adaptively and progressively refines low-level visual representations with the (stochastic) prior pyramid. We train the prior learner and the image generator as a unified model without any post-processing. Our approach achieves the state of the art on multiple datasets, including Places2, Paris StreetView, CelebA, and CelebA-HQ, under both deterministic and probabilistic inpainting settings.
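The two-part structure can be sketched as follows: a prior learner producing a multi-scale feature pyramid, and a generator that progressively refines its features with that pyramid, coarse to fine. Channel counts, the concatenation-based fusion, and the toy encoder are placeholders, not SPN's actual architecture:

```python
# Sketch: prior pyramid + generator that injects priors coarse-to-fine.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorPyramid(nn.Module):
    def __init__(self, ch=32, levels=3):
        super().__init__()
        self.enc = nn.ModuleList([nn.Conv2d(3 if i == 0 else ch, ch, 3, 2, 1)
                                  for i in range(levels)])

    def forward(self, x):
        priors = []
        for conv in self.enc:
            x = F.relu(conv(x))
            priors.append(x)
        return priors[::-1]  # coarse-to-fine list of semantic prior maps

class PyramidGenerator(nn.Module):
    def __init__(self, ch=32, levels=3):
        super().__init__()
        self.fuse = nn.ModuleList([nn.Conv2d(ch * 2, ch, 3, 1, 1)
                                   for _ in range(levels)])
        self.head = nn.Conv2d(ch, 3, 3, 1, 1)

    def forward(self, feat, priors):
        for fuse, p in zip(self.fuse, priors):  # inject priors level by level
            feat = F.interpolate(feat, size=p.shape[-2:], mode="nearest")
            feat = F.relu(fuse(torch.cat([feat, p], dim=1)))
        return torch.tanh(self.head(feat))

priors = PriorPyramid()(torch.randn(1, 3, 64, 64))
out = PyramidGenerator()(torch.randn(1, 32, 4, 4), priors)
print(out.shape)  # torch.Size([1, 3, 32, 32])
```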
Person search aims to jointly localize and identify a query person in natural, uncropped images, and it has been actively studied in the computer vision community over the past few years. In this paper, we delve into the rich contextual information around the target person at both the global and local levels, which we refer to as scene context and group context, respectively. Unlike previous works that handle the two types of context separately, we exploit them in a unified global-local context network (GLCNet) in an intuitive feature-enhancement manner. Specifically, re-ID embeddings and context features are enhanced simultaneously in a multi-stage fashion, ultimately yielding enhanced, discriminative features for person search. We conduct experiments on two person search benchmarks (i.e., CUHK-SYSU and PRW) and extend our approach to a more challenging setting (i.e., character search on MovieNet). Extensive experimental results demonstrate the consistent improvement of the proposed GLCNet over state-of-the-art methods on the three datasets. Our source code, pre-trained models, and the new setting for character search are available at: https://github.com/zhengpeng7/llcnet.
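A toy sketch of the feature-enhancement idea follows: a person embedding is refined with a global scene descriptor and the pooled embeddings of co-occurring people (group context). The fusion layers are assumptions for illustration, not GLCNet's modules:

```python
# Sketch: enhance re-ID embeddings with scene and group context.
import torch
import torch.nn as nn

class ContextEnhancer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.scene_fuse = nn.Linear(dim * 2, dim)
        self.group_fuse = nn.Linear(dim * 2, dim)

    def forward(self, person, scene, group):
        # person: (N, D) box embeddings; scene: (D,) whole-image descriptor;
        # group:  (N, D) mean embedding of the other people in the same image
        x = torch.relu(self.scene_fuse(
            torch.cat([person, scene.expand_as(person)], dim=-1)))
        return torch.relu(self.group_fuse(torch.cat([x, group], dim=-1)))

m = ContextEnhancer()
out = m(torch.randn(5, 256), torch.randn(256), torch.randn(5, 256))
print(out.shape)  # torch.Size([5, 256])
```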
Domain-generalizable person re-identification aims to apply a trained model to unseen domains. Prior works either combine the data from all training domains to capture domain-invariant features, or adopt a mixture of experts to investigate domain-specific information. In this work, we argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-ID models. To this end, we design a novel framework, named Two-stream Adaptive Learning (TAL), to simultaneously model both kinds of information. Specifically, a domain-specific stream is proposed to capture training-domain statistics with batch normalization (BN) parameters, and an adaptive matching layer is designed to dynamically aggregate domain-level information. Meanwhile, we design an adaptive BN layer in the domain-invariant stream to approximate the statistics of the various unseen domains. The two streams work adaptively and collaboratively to learn generalizable re-ID features. Our framework can be applied to both single-source and multi-source domain generalization tasks, and experimental results show that it significantly outperforms state-of-the-art methods.
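A minimal sketch of the adaptive-BN idea: keep per-domain BN statistics and, at test time, normalize a batch with a weighted mix of the stored statistics, where the weights reflect how close the batch is to each training domain. The similarity measure and the mixing rule here are illustrative assumptions:

```python
# Sketch: normalize unseen-domain features with mixed per-domain BN statistics.
import torch

def adaptive_bn(x, domain_means, domain_vars, eps=1e-5):
    # x: (B, C) features; domain_means/vars: (K, C) stats from K training domains
    batch_mean = x.mean(0)                              # (C,)
    # Weight each domain by how close its stored mean is to the batch mean.
    dist = ((domain_means - batch_mean) ** 2).sum(-1)   # (K,)
    w = torch.softmax(-dist, dim=0)                     # (K,)
    mean = (w[:, None] * domain_means).sum(0)           # (C,) mixed statistics
    var = (w[:, None] * domain_vars).sum(0)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.randn(32, 64)
out = adaptive_bn(x, torch.randn(3, 64), torch.rand(3, 64) + 0.5)
print(out.shape)  # torch.Size([32, 64])
```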
We consider the new problem of adapting a human mesh reconstruction model to out-of-domain streaming videos, where the performance of existing SMPL-based models is significantly affected by distribution shifts in camera parameters, bone lengths, backgrounds, and occlusions. We tackle this problem through online adaptation, gradually correcting the model bias during test time. There are two main challenges. First, the lack of 3D annotations increases the training difficulty and leads to 3D ambiguity. Second, non-stationary data distributions make it difficult to strike a balance between fitting regular frames and hard samples with severe occlusions or dramatic changes. To this end, we propose the Dynamic Bilevel Online Adaptation algorithm (DynaBOA). It first introduces temporal constraints to compensate for the unavailable 3D annotations, and leverages a bilevel optimization procedure to resolve the conflicts between multiple objectives. DynaBOA provides additional 3D guidance by retrieving similar source examples despite the distribution shift. Furthermore, it can adaptively adjust the number of optimization steps on individual frames to fully fit hard samples and to avoid overfitting regular frames. DynaBOA achieves state-of-the-art results on three out-of-domain human mesh reconstruction benchmarks.
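A schematic sketch of one bilevel step follows: an inner step updates a temporary copy of the parameters with the lower-level (temporal-constraint) loss, and the outer step updates the real parameters so that, after the inner update, the upper-level loss is low. The model and both losses are stand-in placeholders, not DynaBOA's objectives:

```python
# Sketch: one bilevel online-adaptation step with differentiable inner update.
import torch
import torch.nn as nn

model = nn.Linear(10, 6)  # stand-in for the mesh regressor
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def temporal_loss(pred, prev_pred):   # placeholder: smoothness across frames
    return ((pred - prev_pred) ** 2).mean()

def reprojection_loss(pred, kp2d):    # placeholder: fit to detected 2D keypoints
    return ((pred[:, :2] - kp2d) ** 2).mean()

frame, prev_pred, kp2d = torch.randn(1, 10), torch.randn(1, 6), torch.randn(1, 2)

# Inner step: virtual update on the lower-level (temporal) objective.
inner = temporal_loss(model(frame), prev_pred)
grads = torch.autograd.grad(inner, list(model.parameters()), create_graph=True)
fast = [p - 1e-4 * g for p, g in zip(model.parameters(), grads)]

# Outer step: evaluate the upper-level objective with the fast weights and
# backpropagate through the inner update into the original parameters.
pred = torch.nn.functional.linear(frame, fast[0], fast[1])
outer = reprojection_loss(pred, kp2d)
opt.zero_grad()
outer.backward()
opt.step()
print(float(outer))
```

Because the outer gradient flows through the inner update (create_graph=True), the two objectives are reconciled rather than traded off by simple loss summation.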